# Knowledge Distillation Enhancement
Yixin Distill Qwen 72B 4.5bpw H6 Exl2
Apache-2.0
A high-performance mathematical reasoning and general knowledge processing model distilled from Qwen2.5-72B through reinforcement learning, excelling in mathematical reasoning and general knowledge tasks.
Large Language Model Supports Multiple Languages
Y
LoneStriker
37
3
ABEJA Qwen2.5 7b Japanese V0.1
Apache-2.0
A model trained on Japanese based on Qwen/Qwen2.5-7B-Instruct, enhanced through distillation learning to improve instruction-following performance.
Large Language Model
Transformers Japanese

A
abeja
521
6
Unhinged Author 70B
A 70B-parameter large language model merged using the TIES method, based on Steelskull/L3.3-MS-Nevoria-70b and fused with the DeepSeek-R1-Distill-Llama-70B model
Large Language Model
Transformers

U
FiditeNemini
44
3
Harmaug Guard
Apache-2.0
A security protection model fine-tuned based on DeBERTa-v3-large, used to detect unsafe content in conversations with large language models and prevent jailbreak attacks.
Text Classification
Transformers

H
hbseong
705
39
Relullama 7B
A ReLU-activated sparse large language model fine-tuned based on Llama 2 7B, improving computational efficiency through dynamic parameter selection
Large Language Model
Transformers English

R
SparseLLM
5,323
11
Featured Recommended AI Models